Reduced traction limits the ability of mobile robotic systems to resist or apply large external loads, such as tensioning a payload. A simple and versatile solution is to wrap a tether around naturally occurring objects to leverage the capstan effect and exponentially amplify the holding force. Experiments show that an idealized capstan model explains the force amplification experienced on common irregular outdoor objects (trees, rocks, posts). Robust to variable environmental conditions, this exponential amplification method can exploit single or multiple capstan objects, either in series or in parallel with a team of robots. This adaptability allows a range of potential configurations and is especially useful when objects cannot be fully encircled or grasped. These principles are demonstrated with mobile platforms that (1) control the lowering and arrest of a payload, (2) achieve planar control of a payload, and (3) act as an anchor point for a more capable platform. We show that a simple tether wrapped around a shallow rock in sand amplifies the holding force of a low-traction platform by up to 774 times.
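The exponential amplification referred to here follows the standard capstan (belt-friction) relation, T_hold = T_load · exp(μθ). A minimal sketch is given below; the friction coefficient is an assumed illustrative value, not one reported in the abstract.

```python
import math

def capstan_amplification(mu: float, theta: float) -> float:
    """Holding-force amplification exp(mu * theta) for a tether wrapped
    theta radians around an object with friction coefficient mu."""
    return math.exp(mu * theta)

def wrap_angle_for(amplification: float, mu: float) -> float:
    """Wrap angle (rad) needed to reach a target amplification at a given mu."""
    return math.log(amplification) / mu

# Illustrative numbers only (mu = 0.5 is an assumption, not a value from the paper):
# the reported 774x amplification would correspond to ~13.3 rad of wrap,
# i.e. just over two full turns of the tether around the rock.
print(wrap_angle_for(774, mu=0.5) / (2 * math.pi))   # ~2.1 wraps
```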
Terrestrial animals and robots are prone to flipping over during rapid locomotion on complex terrain. However, small robots are less capable of self-righting from an upside-down orientation than similarly small animals such as insects. Inspired by the discoid cockroach, which self-rights by pushing its wings against the ground, we designed a new robot that self-rights by opening its wings to push against the ground. We used this robot to systematically test how self-righting performance depends on wing-opening magnitude, speed, and asymmetry, and modeled how the kinematic and energetic requirements depend on wing shape and body/wing mass distribution. We found that the robot self-rights dynamically, using kinetic energy to overcome potential-energy barriers; that larger and faster symmetric wing opening increases self-righting performance; and that, when wing opening is small, asymmetric wing opening increases righting probability. Our results suggest that the discoid cockroach's wing-assisted self-righting is a dynamic maneuver. Although the thin, lightweight wings of the discoid cockroach and of our robot are energetically sub-optimal compared to tall, heavy ones, the ability to open the wings saves substantial energy compared to having static shells. Analogous to biological exaptations, our study provides a proof of concept for terrestrial robots to leverage existing morphology to overcome new locomotor challenges.
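The dynamic self-righting criterion described here reduces to an energy balance: wing opening must inject enough kinetic energy to carry the body over the potential-energy barrier of raising its centre of mass. The sketch below uses made-up parameter values, not quantities from the paper.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def can_self_right(mass_kg: float, com_rise_m: float,
                   moment_of_inertia: float, body_angular_speed: float) -> bool:
    """Crude energy-balance check: self-righting succeeds if the kinetic energy
    imparted by wing opening exceeds the potential-energy barrier from raising
    the centre of mass by com_rise_m. All parameters here are hypothetical."""
    kinetic = 0.5 * moment_of_inertia * body_angular_speed ** 2
    barrier = mass_kg * G * com_rise_m
    return kinetic >= barrier

# Example with invented numbers for a ~100 g robot whose CoM must rise ~2 cm:
print(can_self_right(0.1, 0.02, moment_of_inertia=2e-4, body_angular_speed=15.0))  # True
```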
The field of automatic biomedical image analysis crucially depends on robust and meaningful performance metrics for algorithm validation. Current metric usage, however, is often ill-informed and does not reflect the underlying domain interest. Here, we present a comprehensive framework that guides researchers towards choosing performance metrics in a problem-aware manner. Specifically, we focus on biomedical image analysis problems that can be interpreted as classification tasks at the image, object, or pixel level. The framework first compiles domain-interest-, target-structure-, dataset-, and algorithm-output-related properties of a given problem into a problem fingerprint, while also mapping it to the appropriate problem category, namely image-level classification, semantic segmentation, instance segmentation, or object detection. It then guides users through the process of selecting and applying an appropriate set of validation metrics while making them aware of potential pitfalls related to individual choices. In this paper, we describe the current status of the Metrics Reloaded recommendation framework, with the goal of obtaining constructive feedback from the image analysis community. The current version was developed within an international consortium of more than 60 image analysis experts and will be made openly available as a user-friendly toolkit after community-driven optimization.
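As a purely illustrative sketch of the kind of mapping the framework formalises: the category names below follow the abstract, but the fingerprint fields are hypothetical examples, not the framework's actual schema or recommendations.

```python
from dataclasses import dataclass

@dataclass
class ProblemFingerprint:
    decision_level: str          # "image", "object", or "pixel" (illustrative field)
    needs_instance_identity: bool

def problem_category(fp: ProblemFingerprint) -> str:
    """Map a toy problem fingerprint to one of the four problem categories
    named in the abstract."""
    if fp.decision_level == "image":
        return "image-level classification"
    if fp.decision_level == "pixel":
        return "instance segmentation" if fp.needs_instance_identity else "semantic segmentation"
    return "object detection"

print(problem_category(ProblemFingerprint("pixel", needs_instance_identity=False)))
# -> semantic segmentation
```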
Knowledge of joint impedance during walking under various conditions is relevant for clinical decision-making and for the development of robotic gait trainers, leg prostheses, leg orthoses, and wearable exoskeletons. Whereas ankle impedance during walking has been experimentally assessed, knee and hip joint impedance during walking have not yet been identified. Here, we developed and evaluated a lower-limb perturbator to identify hip, knee, and ankle joint impedance during treadmill walking. The lower-limb perturbator (LOPER) consists of an actuator connected to the thigh via rods. The LOPER allows force perturbations to be applied to a free-hanging leg, while the subject stands on the contralateral leg, with a bandwidth of up to 39 Hz. When walking with the LOPER in minimal-impedance mode, the interaction forces between the LOPER and the thigh were low (<5 N) and the effect on the walking pattern was smaller than the within-subject variability during normal walking. Using a nonlinear multibody dynamics model of the swing leg, hip, knee, and ankle joint impedance were estimated during the swing phase in nine subjects walking at a speed of 0.5 m/s. The identified model was able to predict the experimental responses, with a mean variance accounted for of 99%, 96%, and 77% for the hip, knee, and ankle, respectively. The subject-averaged stiffness varied across three time points within the swing phase over the ranges 34-66 Nm/rad, 0-3.5 Nm/rad, and 2.5-24 Nm/rad for the hip, knee, and ankle, respectively. The damping varied over 1.9-4.6 Nms/rad, 0.02-0.14 Nms/rad, and 0.2-2.4 Nms/rad, respectively. The developed LOPER has a negligible effect on the unperturbed walking pattern and allows identification of hip, knee, and ankle joint impedance during the swing phase.
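The stiffness and damping values reported above correspond to the parameters of a joint impedance model relating perturbation-induced torque deviations to angle and velocity deviations. The sketch below shows a simplified linear least-squares identification on synthetic data; the paper itself uses a nonlinear multibody model of the swing leg rather than this direct regression.

```python
import numpy as np

def fit_impedance(d_angle, d_velocity, d_torque):
    """Least-squares fit of joint stiffness K [Nm/rad] and damping B [Nms/rad]
    from perturbation-induced deviations in joint angle, velocity, and torque."""
    A = np.column_stack([d_angle, d_velocity])
    (K, B), *_ = np.linalg.lstsq(A, d_torque, rcond=None)
    return K, B

# Synthetic check: recover K = 40 Nm/rad, B = 2.0 Nms/rad from noiseless data.
rng = np.random.default_rng(0)
dq, dqd = rng.normal(size=200), rng.normal(size=200)
K, B = fit_impedance(dq, dqd, 40.0 * dq + 2.0 * dqd)
print(round(float(K), 1), round(float(B), 2))   # 40.0 2.0
```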
Despite the increasing importance of automatic image analysis, recent meta-research has revealed major flaws with respect to algorithm validation. Performance metrics are particularly key for meaningful, objective, and transparent performance assessment and validation of the automatic algorithms used, yet relatively little attention has been given to the practical pitfalls of using specific metrics for a given image analysis task. These are typically related to (1) the disregard of inherent metric properties, such as the behaviour in the presence of class imbalance or small target structures, (2) the disregard of inherent dataset properties, such as the non-independence of test cases, and (3) the disregard of the actual biomedical domain interest that the metrics should reflect. The purpose of this dynamic document is to illustrate important limitations of performance metrics commonly applied in the field of image analysis. In this context, it focuses on biomedical image analysis problems that can be phrased as image-level classification, semantic segmentation, instance segmentation, or object detection tasks. The current version is based on a Delphi process on metrics conducted by an international consortium of image analysis experts from more than 60 institutions worldwide.
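Pitfall (1) is easy to reproduce with a toy example: under strong class imbalance, plain accuracy rewards a classifier that never predicts the rare class, whereas balanced accuracy exposes it. The numbers below are synthetic and only illustrate the general point.

```python
import numpy as np

# 5 positive (rare) cases, 95 negative cases; the classifier always predicts negative.
y_true = np.array([1] * 5 + [0] * 95)
y_pred = np.zeros_like(y_true)

accuracy = (y_true == y_pred).mean()
sensitivity = (y_pred[y_true == 1] == 1).mean()   # recall on the rare class
specificity = (y_pred[y_true == 0] == 0).mean()
balanced_accuracy = (sensitivity + specificity) / 2

print(accuracy, balanced_accuracy)   # 0.95 0.5 -> accuracy looks good, balanced accuracy does not
```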
Designing experiments often requires balancing between learning about the true treatment effects and earning from allocating more samples to the superior treatment. While optimal algorithms for the Multi-Armed Bandit Problem (MABP) provide allocation policies that optimally balance learning and earning, they tend to be computationally expensive. The Gittins Index (GI) is a solution to the MABP that can simultaneously attain optimality and computational efficiency, and it has recently been used in experiments with Bernoulli and Gaussian rewards. For the first time, we present a modification of the GI rule that can be used in experiments with exponentially-distributed rewards. We report its performance in simulated 2-armed and 3-armed experiments. Compared to traditional non-adaptive designs, our novel GI-modified design shows operating characteristics comparable in learning (e.g. statistical power) but substantially better in earning (e.g. direct benefits). This illustrates the potential of designs that use a GI approach to allocate participants to improve participant benefits, increase efficiency, and reduce experimental costs in adaptive multi-armed experiments with exponential rewards.
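The structure of an index-based adaptive design is sketched below: at each step the arm with the highest index is pulled and its posterior statistics are updated. The index function used here is only a naive stand-in (posterior-mean reward plus an exploration bonus); the paper's contribution is the actual modified Gittins index for exponential rewards, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def allocate(index_fn, true_rates, horizon=500):
    """Index-based adaptive allocation for exponentially-distributed rewards.
    true_rates are the exponential rate parameters; a lower rate means a higher
    mean reward, so a good policy should pull that arm more often."""
    n_arms = len(true_rates)
    counts = np.ones(n_arms)
    sums = np.array([rng.exponential(1.0 / r) for r in true_rates])  # one pull per arm to start
    for _ in range(horizon - n_arms):
        arm = int(np.argmax([index_fn(sums[a], counts[a]) for a in range(n_arms)]))
        sums[arm] += rng.exponential(1.0 / true_rates[arm])
        counts[arm] += 1
    return counts

# Placeholder index (NOT the modified Gittins index from the paper):
naive_index = lambda s, n: (s / n) + np.sqrt(np.log(n + 1) / n)
print(allocate(naive_index, true_rates=[0.8, 1.2]))   # arm 0 (higher mean) gets most pulls
```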
Quadruped robots are currently used in industrial robotics as mechanical aids to automate several routine tasks. However, the use of such robots in domestic settings is still largely a research topic. This paper discusses the design and virtual simulation of such a robot capable of detecting and understanding human emotions, generating its gait, and responding via sounds and expressions on a screen. To this end, we use a combination of reinforcement learning and software engineering concepts to simulate a quadruped robot that can understand emotions, navigate through various terrains, detect sound sources, and respond to emotions using audio-visual feedback. This paper aims to establish a framework for simulating a quadruped robot that is emotionally intelligent and can primarily respond to audio-visual stimuli using motor or audio responses. Emotion detection from speech was not as performant as ERANNs or Zeta Policy learning, but still achieved an accuracy of 63.5%. The video emotion detection system produced results almost on par with the state of the art, with an accuracy of 99.66%. Due to its "on-policy" learning process, the PPO algorithm learned very quickly, allowing the simulated dog to demonstrate a remarkably seamless gait across the different cadences and variations. This enabled the quadruped robot to respond to the generated stimuli, allowing us to conclude that it functions as predicted and satisfies the aim of this work.
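The abstract names PPO as the on-policy gait-learning algorithm. A minimal sketch of how such a policy could be trained with a standard off-the-shelf PPO implementation is shown below; the environment name is a stand-in, since the paper's own simulation setup is not specified here.

```python
# Sketch only: assumes stable-baselines3 and a MuJoCo-backed Gymnasium install.
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("Ant-v4")                  # placeholder for the simulated quadruped
model = PPO("MlpPolicy", env, verbose=1)  # on-policy learning, as noted in the abstract
model.learn(total_timesteps=1_000_000)
model.save("quadruped_gait_ppo")
```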
Real-world robotic grasping can be done robustly if a complete 3D Point Cloud Data (PCD) of an object is available. However, in practice, PCDs are often incomplete when objects are viewed from few and sparse viewpoints before the grasping action, leading to the generation of wrong or inaccurate grasp poses. We propose a novel grasping strategy, named 3DSGrasp, that predicts the missing geometry from the partial PCD to produce reliable grasp poses. Our proposed PCD completion network is a Transformer-based encoder-decoder network with an Offset-Attention layer. Our network is inherently invariant to object pose and point permutation, and generates PCDs that are geometrically consistent and properly completed. Experiments on a wide range of partial PCDs show that 3DSGrasp outperforms the best state-of-the-art method on PCD completion tasks and largely improves the grasping success rate in real-world scenarios. The code and dataset will be made available upon acceptance.
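For readers unfamiliar with offset attention, the block below is a simplified PyTorch sketch in the spirit of the Point Cloud Transformer: self-attention is computed over point features, but the residual branch processes the offset between the input and the attended context. The exact layer used in 3DSGrasp may differ in normalization and feed-forward details.

```python
import torch
import torch.nn as nn

class OffsetAttention(nn.Module):
    """Simplified offset-attention block over per-point features (B, N, dim)."""
    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim, bias=False)
        self.k = nn.Linear(dim, dim, bias=False)
        self.v = nn.Linear(dim, dim, bias=False)
        self.proj = nn.Linear(dim, dim)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = torch.softmax(self.q(x) @ self.k(x).transpose(1, 2) / x.shape[-1] ** 0.5, dim=-1)
        context = attn @ self.v(x)
        offset = x - context                              # process the offset, not the context
        return x + self.norm(torch.relu(self.proj(offset)))

features = torch.randn(2, 1024, 256)                      # partial-cloud point features
print(OffsetAttention(256)(features).shape)               # torch.Size([2, 1024, 256])
```

Because the attention weights are computed from the points themselves, the block is equivariant to point reordering, which is what makes the completion insensitive to point permutation.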
When robots learn reward functions using high capacity models that take raw state directly as input, they need to both learn a representation for what matters in the task -- the task ``features" -- as well as how to combine these features into a single objective. If they try to do both at once from input designed to teach the full reward function, it is easy to end up with a representation that contains spurious correlations in the data, which fails to generalize to new settings. Instead, our ultimate goal is to enable robots to identify and isolate the causal features that people actually care about and use when they represent states and behavior. Our idea is that we can tune into this representation by asking users what behaviors they consider similar: behaviors will be similar if the features that matter are similar, even if low-level behavior is different; conversely, behaviors will be different if even one of the features that matter differs. This, in turn, is what enables the robot to disambiguate between what needs to go into the representation versus what is spurious, as well as what aspects of behavior can be compressed together versus not. The notion of learning representations based on similarity has a nice parallel in contrastive learning, a self-supervised representation learning technique that maps visually similar data points to similar embeddings, where similarity is defined by a designer through data augmentation heuristics. By contrast, in order to learn the representations that people use, so we can learn their preferences and objectives, we use their definition of similarity. In simulation as well as in a user study, we show that learning through such similarity queries leads to representations that, while far from perfect, are indeed more generalizable than self-supervised and task-input alternatives.
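One way to read the similarity-query idea is as a contrastive-style objective in which the user's "similar"/"different" answers replace data-augmentation heuristics. The sketch below is illustrative only: the embedding architecture, margin, and loss form are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy embedding over fixed-length trajectory feature vectors (dimension 32 is arbitrary).
embed = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 8))
optimizer = torch.optim.Adam(embed.parameters(), lr=1e-3)

def similarity_loss(traj_a, traj_b, same: bool, margin: float = 1.0) -> torch.Tensor:
    """Pull embeddings of behaviours the user calls similar together,
    push embeddings of behaviours they call different at least `margin` apart."""
    d = F.pairwise_distance(embed(traj_a), embed(traj_b))
    return (d ** 2).mean() if same else (F.relu(margin - d) ** 2).mean()

# One toy update on random "trajectory" vectors labelled as similar by the user.
a, b = torch.randn(16, 32), torch.randn(16, 32)
loss = similarity_loss(a, b, same=True)
loss.backward()
optimizer.step()
print(float(loss))
```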
The information transfer rate (ITR) is a widely used information measurement metric, particularly popularized for SSVEP-based Brain-Computer Interfaces (BCIs). By combining speed and accuracy into a single-valued parameter, this metric aids in the evaluation and comparison of various target identification algorithms across different BCI communities. To accurately depict performance and to inspire end-to-end designs for future BCIs, a more thorough examination and definition of the ITR is therefore required. We model the symbiotic communication medium, hosted by the retinogeniculate visual pathway, as a discrete memoryless channel and use the modified capacity expressions to redefine the ITR. We use graph theory to characterize the relationship between the asymmetry of the transition statistics and the ITR gain under the new definition, leading to potential bounds on data rate performance. On two well-known SSVEP datasets, we compared two cutting-edge target identification methods. Results indicate that the induced DM channel asymmetry has a greater impact on the actual perceived ITR than the change in input distribution. Moreover, it is demonstrated that the ITR gain under the new definition is inversely correlated with the asymmetry in the channel transition statistics. Individual input customizations are further shown to yield perceived ITR performance improvements. An algorithm is proposed to find the capacity of binary classification, and further discussion is given on extending such results to ensemble techniques. We anticipate that the results of our study will contribute to the characterization of the highly dynamic BCI channel capacities, performance thresholds, and improved BCI stimulus designs for a tighter symbiosis between the human brain and computer systems, while enhancing the efficiency of the underlying communication resources.
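For context, the classical ITR definition that this work revisits is the Wolpaw formula, which treats the BCI as a symmetric channel with N equiprobable targets; the paper's argument is precisely that this symmetry assumption is violated by real transition statistics. A reference implementation of the classical formula:

```python
import math

def wolpaw_itr(n_targets: int, accuracy: float, trial_seconds: float) -> float:
    """Classical Wolpaw ITR in bits/min for n_targets equiprobable targets,
    classification accuracy `accuracy`, and trial duration `trial_seconds`."""
    if accuracy >= 1.0:
        bits = math.log2(n_targets)
    else:
        bits = (math.log2(n_targets)
                + accuracy * math.log2(accuracy)
                + (1 - accuracy) * math.log2((1 - accuracy) / (n_targets - 1)))
    return bits * 60.0 / trial_seconds

# Example: a 40-target SSVEP speller at 90% accuracy with 1.5 s trials.
print(round(wolpaw_itr(40, 0.9, 1.5), 1))   # ~173.0 bits/min
```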